
    HNT-AI: An Automatic Segmentation Framework for Head and Neck Primary Tumors and Lymph Nodes in FDG-PET/CT Images

    Head and neck cancer is one of the most prevalent cancers in the world. Automatic delineation of primary tumors and lymph nodes is important for cancer diagnosis and treatment. In this paper, we develop a deep learning-based model for automatic tumor segmentation, HNT-AI, using PET/CT images provided by the MICCAI 2022 Head and Neck Tumor segmentation challenge (HECKTOR). We investigate the effect of residual blocks, squeeze-and-excitation normalization, and grid-attention gates on the performance of 3D U-Net. We project the predicted masks onto the z-axis and apply k-means clustering to reduce the number of false-positive predictions. Our proposed HNT-AI segmentation framework achieves aggregated Dice scores of 0.774 and 0.759 for primary tumors and lymph nodes, respectively, on the unseen external test set. Qualitative analysis shows that the predicted segmentation masks tend to follow the high standardized uptake value (SUV) areas on the PET scans more closely than the ground-truth masks do. The largest tumor volume, the largest lymph node volume, and the total number of lymph nodes derived from the segmentation proved to be potential biomarkers for recurrence-free survival, with a C-index of 0.627 on the test set.
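    The abstract does not spell out the post-processing step, but a minimal sketch of one plausible reading, in which connected components of the predicted mask are clustered by their z-axis centroids and outlying components are discarded as false positives, could look like this in Python (the component labeling, k=2, and volume-based cluster choice are all assumptions for illustration, not the authors' code):

    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def reduce_false_positives(mask: np.ndarray) -> np.ndarray:
        """Drop connected components whose z-centroids fall outside the
        dominant k-means cluster (assumed axis order: z, y, x)."""
        labeled, n = ndimage.label(mask)
        if n < 2:
            return mask  # nothing to filter
        # z-coordinate of each component's centroid, projected on the z-axis
        centroids = np.array(ndimage.center_of_mass(mask, labeled, range(1, n + 1)))
        z = centroids[:, [0]]
        clusters = KMeans(n_clusters=2, n_init=10).fit_predict(z)
        # keep the cluster that holds the larger total foreground volume
        volumes = np.array(ndimage.sum(mask, labeled, range(1, n + 1)))
        keep_cluster = np.argmax([volumes[clusters == c].sum() for c in (0, 1)])
        keep_labels = np.flatnonzero(clusters == keep_cluster) + 1
        return np.isin(labeled, keep_labels).astype(mask.dtype)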

    Synthetic data of simulated microcalcification clusters to train and explain deep learning detection models in contrast-enhanced mammography

    Deep learning (DL) models can be trained on contrast-enhanced mammography (CEM) images to detect and classify lesions in the breast. As they often put more emphasis on the masses enhanced in the recombined image, they can fail to recognize microcalcification clusters, since these are hardly enhanced and are mainly visible in the (processed) low-energy image. Therefore, we developed a method to create synthetic data with simulated microcalcification clusters to be used for data augmentation and explainability studies when training DL models. First, 3-dimensional voxel models of simulated microcalcification clusters were constructed from descriptors of shape and structure. In a set of 500 simulated microcalcification clusters, the size and the number of microcalcifications per cluster followed the distributions observed in real clusters. The insertion of these clusters into real images of non-delineated CEM cases was evaluated by radiologists. The realism score was acceptable for single-view applications. Radiologists could more easily categorize synthetic clusters into benign versus malignant than real clusters. In a second phase of the work, the role of synthetic data for training and/or explaining DL models was explored. A Mask R-CNN model was trained with synthetic CEM images containing microcalcification clusters. After a training run of 100 epochs, the model was found to overfit on a training set of 192 images. In an evaluation with multiple test sets, it was found that this apparently high sensitivity was due to the model recognizing the image rather than the cluster. Synthetic data could be applied for more tests, such as the impact of particular features in both background and lesion models.
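    As a toy illustration of the insertion idea, simulated bright foci can be scattered around a seed point and added to a low-energy image; the disc model, contrast value, and sampling ranges below are illustrative assumptions, not the authors' voxel-model pipeline:

    import numpy as np

    rng = np.random.default_rng(0)

    def insert_cluster(image, center, n_range=(5, 15), spread=12.0,
                       radius_range=(1, 4), contrast=0.15):
        """Scatter a handful of small bright discs around `center` to mimic
        a microcalcification cluster (purely illustrative)."""
        out = image.copy()
        yy, xx = np.ogrid[:out.shape[0], :out.shape[1]]
        for _ in range(rng.integers(*n_range)):
            cy, cx = rng.normal(center, spread, size=2)  # position within cluster
            r = rng.integers(*radius_range)
            disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
            out[disc] += contrast * image.max()          # local brightening
        return out

    background = rng.normal(0.5, 0.05, size=(256, 256))  # stand-in for a CEM image
    augmented = insert_cluster(background, center=(128, 128))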

    Precision-medicine-toolbox: An open-source Python package for facilitation of quantitative medical imaging and radiomics analysis

    Medical image analysis plays a key role in precision medicine, as it allows clinicians to identify anatomical abnormalities and is routinely used in clinical assessment. Data curation and pre-processing of medical images are critical steps in quantitative medical image analysis that can have a significant impact on the resulting model performance. In this paper, we introduce the precision-medicine-toolbox, which allows researchers to perform data curation, image pre-processing, handcrafted radiomics extraction (via Pyradiomics), and feature exploration tasks with Python. With this open-source solution, we aim to address the data preparation and exploration problem, bridge the gap between currently existing packages, and improve the reproducibility of quantitative medical imaging research.
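    Since the toolbox delegates handcrafted feature extraction to Pyradiomics, the underlying extraction step looks roughly like the following (file names are placeholders; this shows the Pyradiomics call the toolbox wraps, not the toolbox's own interface):

    from radiomics import featureextractor

    # Configure which handcrafted feature classes to compute
    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    extractor.enableFeatureClassByName('firstorder')  # intensity statistics
    extractor.enableFeatureClassByName('glcm')        # texture features

    # Image and segmentation mask paths are placeholders
    features = extractor.execute('patient001_image.nrrd', 'patient001_mask.nrrd')
    for name, value in features.items():
        if not name.startswith('diagnostics'):        # skip metadata entries
            print(name, value)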

    MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain Adaptation for Breast MRI Segmentation in Small Datasets

    Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging (MRI) has received increased attention in the last decade; however, domain shift, which arises from different vendors, acquisition protocols, and biological heterogeneity, remains an important but challenging obstacle on the path towards clinical implementation. In this paper, we propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA) framework to address this issue in an unsupervised manner. Our approach incorporates self-training with contrastive learning to align feature representations between domains. In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts to better exploit the underlying semantic information of the image at different levels. To resolve the data imbalance problem, we utilize a category-wise cross-domain sampling strategy to sample anchors from target images and build a hybrid memory bank to store samples from source images. We have validated MSCDA on a challenging task of cross-domain breast MRI segmentation between datasets of healthy volunteers and invasive breast cancer patients. Extensive experiments show that MSCDA effectively improves the model's feature alignment capabilities between domains, outperforming state-of-the-art methods. Furthermore, the framework is shown to be label-efficient, achieving good performance with a smaller source dataset. The code is publicly available at https://github.com/ShengKuangCN/MSCDA.
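    To give a flavor of the multi-level contrast, a minimal sketch of the centroid-to-centroid term follows; the masked average pooling, InfoNCE form, and temperature are assumptions for illustration (the full MSCDA loss also uses pixel-to-pixel and pixel-to-centroid contrasts with a hybrid memory bank; see the repository for the actual implementation):

    import torch
    import torch.nn.functional as F

    def class_centroids(feats, labels, n_classes):
        """Masked average pooling: feats (B, C, H, W) and labels (B, H, W)
        give one L2-normalized centroid per class, shape (K, C).
        Assumes every class is present in the batch."""
        c = feats.shape[1]
        flat = feats.permute(0, 2, 3, 1).reshape(-1, c)
        lab = labels.reshape(-1)
        cents = torch.stack([flat[lab == k].mean(dim=0) for k in range(n_classes)])
        return F.normalize(cents, dim=1)

    def centroid_contrast(src_cents, tgt_cents, temperature=0.1):
        """InfoNCE over centroids: the same class across domains forms the
        positive pair, all other classes act as negatives."""
        logits = src_cents @ tgt_cents.t() / temperature   # (K, K) similarities
        targets = torch.arange(src_cents.size(0))          # positives on diagonal
        return F.cross_entropy(logits, targets)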

    Predicting Adverse Radiation Effects in Brain Tumors After Stereotactic Radiotherapy With Deep Learning and Handcrafted Radiomics

    Introduction: Patients with solid cancers have a cumulative risk of 20-40% of developing brain metastases (BM). Stereotactic radiotherapy (SRT) enables the application of high focal doses of radiation to a volume and is often used for BM treatment. However, SRT can cause adverse radiation effects (ARE), such as radiation necrosis, which sometimes cause irreversible damage to the brain. It is therefore of clinical interest to identify patients at a high risk of developing ARE. We hypothesized that models trained with radiomics features, deep learning (DL) features, and patient characteristics, or their combination, can predict ARE risk in patients with BM before SRT.

    Methods: Gadolinium-enhanced T1-weighted MRIs and characteristics from patients treated with SRT for BM were collected for a training and testing cohort (N = 1,404) and a validation cohort (N = 237) from a separate institute. From each lesion in the training set, radiomics features were extracted and used to train an extreme gradient boosting (XGBoost) model. A DL model was trained on the same cohort to make a separate prediction and to extract the last layer of features. Different XGBoost models were built using only radiomics features, DL features, or patient characteristics, or a combination of them. Evaluation was performed using the area under the curve (AUC) of the receiver operating characteristic curve on the external dataset. Predictions for individual lesions and per patient developing ARE were investigated.

    Results: The best-performing XGBoost model on the lesion level was trained on a combination of radiomics and DL features (AUC of 0.71 and recall of 0.80). On the patient level, a combination of radiomics features, DL features, and patient characteristics obtained the best performance (AUC of 0.72 and recall of 0.84). The DL model alone achieved an AUC of 0.64 and recall of 0.85 per lesion and an AUC of 0.70 and recall of 0.60 per patient.

    Conclusion: Machine learning models built on radiomics and DL features extracted from BM, combined with patient characteristics, show potential to predict ARE at the patient and lesion levels. These models could be used in clinical decision making, informing patients of their risk of ARE and allowing physicians to opt for different therapies.
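    A minimal sketch of the combined-feature XGBoost setup could look like the following; the arrays, file names, split, and hyperparameters are placeholders, not the study's actual pipeline:

    import numpy as np
    import xgboost as xgb
    from sklearn.metrics import recall_score, roc_auc_score

    # Placeholder per-lesion feature matrices and labels (file names hypothetical)
    radiomics = np.load('radiomics_features.npy')   # shape: (n_lesions, n_radiomics)
    dl_feats = np.load('dl_last_layer.npy')         # DL features from the last layer
    y = np.load('are_labels.npy')                   # 1 = lesion developed ARE

    X = np.hstack([radiomics, dl_feats])            # combine both feature sets

    model = xgb.XGBClassifier(n_estimators=300, max_depth=4,
                              learning_rate=0.05, eval_metric='auc')
    model.fit(X, y)

    # External validation on the separate-institute cohort (placeholders)
    X_ext, y_ext = np.load('X_external.npy'), np.load('y_external.npy')
    probs = model.predict_proba(X_ext)[:, 1]
    print('AUC:', roc_auc_score(y_ext, probs))
    print('Recall:', recall_score(y_ext, probs > 0.5))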